Recent progress on vision-language foundation models has brought significant advances to building general-purpose robots. By using pre-trained models to encode the scene and instructions as inputs for decision making, the instruction-conditioned policy can generalize across different objects and tasks. While this is encouraging, the policy still fails in most cases given an unseen task or environment. To adapt the policy to unseen tasks and environments, we explore a new paradigm that leverages the pre-trained foundation models with Self-PLAY and Self-Describe (SPLAYD). When deploying the trained policy to a new task or a new environment, we first let the policy self-play with randomly generated instructions to record the demonstrations. While the executions may be wrong, we can use the pre-trained foundation models to accurately self-describe (i.e., re-label or classify) the demonstrations. This automatically provides new pairs of demonstration-instruction data for policy fine-tuning. We evaluate our method on a broad range of experiments with a focus on generalization to unseen objects, unseen tasks, unseen environments, and sim-to-real transfer. We show that SPLAYD improves over baselines by a large margin in all cases. Our project page is available at https://geyuying.github.io/SPLAYD/
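The self-play/self-describe loop described above can be sketched in a few lines. Everything below is a toy illustration: `policy_execute` and `self_describe` are hypothetical stand-ins for the trained policy and the pretrained foundation model, and the object/action vocabularies are invented for the example.

```python
import random

OBJECTS = ["red block", "blue bowl", "green cup"]
ACTIONS = ["pick up", "push", "stack"]

def random_instruction():
    # Self-play starts from a randomly generated instruction.
    return f"{random.choice(ACTIONS)} the {random.choice(OBJECTS)}"

def policy_execute(instruction):
    # A possibly wrong execution: the robot may act on a different
    # object or perform a different action than instructed.
    return {"instruction": instruction,
            "achieved": f"{random.choice(ACTIONS)} the {random.choice(OBJECTS)}"}

def self_describe(trajectory):
    # A foundation model re-labels what actually happened in the
    # recorded demonstration (here trivially read off the trajectory).
    return trajectory["achieved"]

def collect_finetuning_pairs(num_episodes):
    # Each episode yields a (demonstration, relabeled instruction) pair
    # that can be used for policy fine-tuning.
    pairs = []
    for _ in range(num_episodes):
        traj = policy_execute(random_instruction())  # self-play
        label = self_describe(traj)                  # self-describe
        pairs.append((traj, label))
    return pairs
```

The key point the sketch captures is that the relabeled instruction always matches the achieved behavior, so the collected pairs are valid training data even when the original execution was wrong.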
Dancing video retargeting aims to synthesize a video that transfers the dance movements from a source video to a target person. Previous work requires collecting a several-minute video of the target person, with thousands of frames, to train a personalized model. However, the trained model can only generate videos of the same person. To address this limitation, recent work tackles few-shot dancing video retargeting, which synthesizes videos of unseen persons by leveraging only a few frames of them. In practice, given a few frames of a person, these works simply regard them as a batch of individual images without temporal correlations, thus generating temporally incoherent dancing videos of low visual quality. In this work, we model a few frames of a person as a series of dancing moves, where each move contains two consecutive frames, to extract the appearance patterns and the temporal dynamics of this person. We utilize temporal-aware meta-learning to optimize the initialization of the model through the synthesis of dancing moves, such that the meta-trained model can be efficiently tuned towards enhanced visual quality and strengthened temporal stability for unseen persons with only a few frames. Extensive evaluations show the large superiority of our method.
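Modeling a few frames as consecutive-frame "dancing moves" amounts to pairing each frame with its successor. A minimal sketch (the frame objects here are hypothetical placeholders for decoded video frames):

```python
def frames_to_moves(frames):
    # Each "dancing move" is a pair (frame_t, frame_{t+1}), so the few
    # available frames carry both appearance and short-range temporal
    # dynamics instead of being treated as unrelated single images.
    return [(frames[i], frames[i + 1]) for i in range(len(frames) - 1)]
```

A sequence of K frames thus yields K-1 overlapping moves; a single frame yields none, which reflects why at least two frames are needed to capture dynamics.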
Pre-training a model to learn transferable video-text representations for retrieval has attracted much attention in recent years. Previous dominant works mainly adopt two separate encoders for efficient retrieval, but ignore local associations between videos and texts. Another line of research uses a joint encoder to interact video with text, but suffers from low efficiency since each text-video pair needs to be fed into the model. In this work, we enable fine-grained video-text interactions while maintaining high efficiency for retrieval via a novel pretext task, dubbed Multiple Choice Questions (MCQ), where a parametric module BridgeFormer is trained to answer the "questions" constructed from the text features by resorting to the video features. Specifically, we exploit the rich semantics of text (i.e., nouns and verbs) to build questions, with which the video encoder can be trained to capture more regional content and temporal dynamics. In the form of questions and answers, the semantic associations between local video and text features can be properly established. BridgeFormer can be removed for downstream retrieval, rendering an efficient and flexible model with only two encoders. Our method outperforms state-of-the-art methods on the popular text-to-video retrieval task across five datasets with different experimental setups (i.e., zero-shot and fine-tuning), including HowTo100M (one million videos). We further conduct zero-shot action recognition, which can be cast as video-to-text retrieval, and our method also significantly outperforms its counterparts. As an additional benefit, our method achieves competitive results with much shorter pre-training videos on single-modality downstream tasks, e.g., action recognition with linear evaluation.
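The question-construction step can be illustrated at the token level. This is a deliberate simplification: the actual method operates on learned text features and identifies nouns and verbs with a tagger, whereas the sketch below just erases given content words from a caption to form question-answer pairs.

```python
def make_mcq_samples(caption, content_words):
    # For each noun/verb in the caption, erase it to form a "question";
    # the erased word is the "answer" the bridge module must recover by
    # resorting to video features.
    samples = []
    tokens = caption.split()
    for w in content_words:
        if w in tokens:
            question = " ".join("[?]" if t == w else t for t in tokens)
            samples.append({"question": question, "answer": w})
    return samples
```

Erasing a noun forces the model to attend to regional content (what object is shown), while erasing a verb forces it to attend to temporal dynamics (what action happens), which is the intuition behind using both word types.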
Recent advanced methods for fashion landmark detection are mainly driven by training convolutional neural networks on large-scale fashion datasets, which have a massive number of annotated landmarks. However, obtaining such large-scale annotations is difficult and expensive in real-world applications, so models that generalize well from a small amount of labeled data are desired. We investigate this problem of few-shot fashion landmark detection, where only a few labeled samples are available for an unseen task. This work proposes a novel framework named MetaCloth via meta-learning, which is able to learn unseen tasks of dense fashion landmark detection using only a few annotated samples. Unlike previous meta-learning works that focus on solving "N-way K-shot" tasks, where each task predicts N classes by training with K annotated samples for each class (N is fixed for all seen and unseen tasks), a task in MetaCloth detects N different landmarks for different clothing categories using K samples, where N varies across tasks, because different clothing categories usually have various numbers of landmarks. Therefore, the number of parameters varies across different seen and unseen tasks in MetaCloth. MetaCloth is carefully designed to dynamically generate different numbers of parameters for different tasks, and to learn a generalizable feature extraction network from a few annotated samples with a set of good initialization parameters. Extensive experiments show that MetaCloth outperforms its counterparts by a large margin.
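The structural point that N (and hence the prediction head's parameter count) differs per task can be shown with a toy per-task head generator. In the actual framework these parameters are produced by a learned generator on top of the shared feature extractor; the sketch below simply draws them randomly to show the varying shapes.

```python
import numpy as np

def make_task_head(feat_dim, num_landmarks, rng=None):
    # Different garment categories have different numbers of landmarks N,
    # so each task gets a freshly generated head of matching size.
    if rng is None:
        rng = np.random.default_rng(0)
    return rng.standard_normal((num_landmarks, feat_dim)) * 0.01

def predict_landmarks(features, head):
    # features: (feat_dim,) embedding from the shared backbone;
    # returns one score per landmark of this category.
    return head @ features
```

A trousers task and a dress task would thus call `make_task_head` with different `num_landmarks`, while sharing the same backbone features.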
As natural language processing (NLP) for gender bias becomes a significant interdisciplinary topic, the prevalent data-driven techniques such as large-scale language models suffer from data inadequacy and biased corpora, especially for languages with insufficient resources such as Chinese. To this end, we propose a Chinese cOrpus foR Gender bIas Probing and Mitigation, CORGI-PM, which contains 32.9k sentences with high-quality labels derived by following an annotation scheme specifically developed for gender bias in the Chinese context. Moreover, we address three challenges for automatic textual gender bias mitigation, which require the models to detect, classify, and mitigate textual gender bias. We also conduct experiments with state-of-the-art language models to provide baselines. To the best of our knowledge, CORGI-PM is the first sentence-level Chinese corpus for gender bias probing and mitigation.
We present Second Thought, a new learning paradigm that enables language models (LMs) to re-align with human values. By modeling the chain-of-edits between value-unaligned and value-aligned text, with LM fine-tuning and additional refinement through reinforcement learning, Second Thought not only achieves superior performance on three value-alignment benchmark datasets but also shows strong human-value transfer-learning ability in few-shot scenarios. The generated editing steps also offer better interpretability and ease of interactive error correction. Extensive human evaluations further confirm its effectiveness.
Three-dimensional (3D) ultrasound imaging has been applied for scoliosis assessment, but the current assessment method uses only the coronal projection image and cannot illustrate the 3D deformity and vertebra rotation. Vertebra detection is essential to reveal 3D spine information, but the detection task is challenging due to complex data and limited annotations. We propose VertMatch, a two-step framework to detect vertebral structures in 3D ultrasound volumes by utilizing unlabeled data in a semi-supervised manner. The first step detects the possible positions of structures on the transverse slice globally, and local patches are then cropped based on the detected positions. The second step distinguishes whether the patches contain real vertebral structures and screens the predicted positions from the first step. VertMatch develops three novel components for semi-supervised learning: for position detection in the first step, (1) an anatomical prior is used to screen pseudo labels generated by the confidence-threshold method; (2) multi-slice consistency is used to exploit more unlabeled data by inputting multiple adjacent slices; and for patch identification in the second step, (3) the categories are rebalanced in each batch to solve the imbalance problem. Experimental results demonstrate that VertMatch can detect vertebrae accurately in ultrasound volumes and outperforms state-of-the-art methods. VertMatch is also validated in a clinical application on forty ultrasound scans, and it can be a promising approach for 3D assessment of scoliosis.
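The anatomical-prior screening of pseudo labels (component 1) can be illustrated with a toy one-dimensional version: keep confident detections, then enforce a plausible spacing between successive vertebrae. The threshold and spacing values below are illustrative assumptions, not values from the paper.

```python
def screen_pseudo_labels(detections, conf_thresh=0.9, min_gap=5.0, max_gap=30.0):
    # detections: list of (position_along_spine_mm, confidence) pairs.
    # Step 1: confidence-threshold method keeps only confident pseudo labels.
    kept = sorted(p for p, c in detections if c >= conf_thresh)
    # Step 2: anatomical prior -- successive vertebrae must be a plausible
    # distance apart; drop candidates that violate the spacing constraint
    # relative to the last accepted position.
    screened = []
    for p in kept:
        if not screened or min_gap <= p - screened[-1] <= max_gap:
            screened.append(p)
    return screened
```

In the example below, a low-confidence detection is removed by the threshold, and a confident but anatomically implausible duplicate (2 mm from its neighbor) is removed by the prior.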
To reproduce the success of text-to-image (T2I) generation, recent works in text-to-video (T2V) generation employ large-scale text-video datasets for fine-tuning. However, such a paradigm is computationally expensive. Humans have the amazing ability to learn new visual concepts from just one single exemplar. We hereby study a new T2V generation problem$\unicode{x2014}$One-Shot Video Generation, where only a single text-video pair is presented for training an open-domain T2V generator. Intuitively, we propose to adapt the T2I diffusion model pretrained on massive image data for T2V generation. We make two key observations: 1) T2I models are able to generate images that align well with the verb terms; 2) extending T2I models to generate multiple images concurrently exhibits surprisingly good content consistency. To further learn continuous motion, we propose Tune-A-Video with a tailored Sparse-Causal Attention, which generates videos from text prompts via efficient one-shot tuning of pretrained T2I diffusion models. Tune-A-Video is capable of producing temporally coherent videos for various applications such as subject or background change, attribute editing, and style transfer, demonstrating the versatility and effectiveness of our method.
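One simplified reading of Sparse-Causal Attention is that each frame's queries attend only to the keys of the first frame and of the immediately preceding frame, rather than to all frames. The toy mask below encodes that connectivity pattern only; the actual attention layers, projections, and training details are not modeled here.

```python
import numpy as np

def sparse_causal_mask(num_frames):
    # mask[i, j] == True means queries of frame i may attend to keys of
    # frame j. Each frame attends only to the first frame (a global
    # anchor for content consistency) and its immediate predecessor
    # (for continuous motion), keeping the cost per frame constant.
    mask = np.zeros((num_frames, num_frames), dtype=bool)
    for i in range(num_frames):
        mask[i, 0] = True              # anchor on the first frame
        mask[i, max(i - 1, 0)] = True  # previous frame (frame 0 falls back to itself)
    return mask
```

Compared with full spatio-temporal attention, whose mask would be all-True with quadratic cost in the number of frames, this sparse pattern grows only linearly while still propagating information from the first frame to every later one.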
We introduce \textsc{PoliteRewrite} -- a dataset for polite language rewrite, a novel sentence rewrite task. Compared with previous text style transfer tasks, which can mostly be addressed by slight token- or phrase-level edits, polite language rewrite requires deep understanding and extensive sentence-level edits over an offensive and impolite sentence to deliver the same message euphemistically and politely. This makes the task more challenging, not only for NLP models but also for human annotators. To alleviate the human effort and enable efficient annotation, we propose a novel annotation paradigm in which human annotators collaborate with GPT-3.5 to annotate \textsc{PoliteRewrite}. The released dataset has 10K polite sentence rewrites annotated collaboratively by GPT-3.5 and humans, which can be used as a gold standard for training, validation, and testing; and 100K high-quality polite sentence rewrites by GPT-3.5 without human review. We hope this work (the 10K+100K dataset will be released soon) can contribute to research on more challenging sentence rewriting and provoke further thought on resource annotation paradigms aided by large-scale pretrained models.
Summary quality assessment metrics fall into two categories: reference-based and reference-free. Reference-based metrics are theoretically more accurate but are limited by the availability and quality of human-written references, both of which are difficult to ensure. This has inspired the development of reference-free metrics, which are independent of human-written references, in the past few years. However, existing reference-free metrics cannot be both zero-shot and accurate. In this paper, we propose a zero-shot yet accurate reference-free approach in a sneaky way: feeding the documents from which the summaries are generated, as references, into reference-based metrics. Experimental results show that this zero-shot approach yields the best-performing reference-free metrics on nearly all aspects of several recently released datasets, sometimes even beating reference-free metrics specifically trained for this task. We further investigate which reference-based metrics can benefit from such repurposing and whether our additional tweaks help.
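The repurposing trick is a one-line wrapper around any reference-based metric. The metric below is a toy unigram-precision stand-in (a real evaluation would plug in a metric such as ROUGE or BERTScore); the wrapper shows the substitution itself.

```python
def toy_overlap_metric(candidate, reference):
    # Toy stand-in for a reference-based metric: unigram precision of
    # the candidate summary against the reference text.
    cand, ref = candidate.lower().split(), set(reference.lower().split())
    return sum(t in ref for t in cand) / max(len(cand), 1)

def reference_free_score(summary, source_document, metric=toy_overlap_metric):
    # The repurposing: feed the source document itself as the
    # "reference", so no human-written reference is needed and the
    # metric becomes zero-shot reference-free.
    return metric(summary, source_document)
```

Because the source document is always available at evaluation time, the wrapped metric inherits the scoring machinery of the reference-based metric while dropping its dependence on human references.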